
Train a Segmenter

This guide shows you how to set up and configure the OV20i segmentation feature to automatically detect, measure, and analyze specific features or defects in your parts. Use segmentation when you need to identify irregular shapes, measure areas, or detect specific patterns that can't be handled by simple classification.

When to Use Segmentation: Surface defects, fluid spills, irregular shapes, area measurements, pattern detection, or any features where pixel-level precision is required.

Before You Start

What You'll Need

  • OV20i camera system set up and connected
  • Test parts with features you want to segment (e.g., sheets with pencil marks)
  • Good lighting conditions for your specific application
  • 15-20 sample images for training

Step 1: Create a Segmentation Recipe

1.1 Start New Recipe

  1. Navigate to All Recipes page
  2. Click + New Recipe (top-right corner)
  3. Enter Recipe Name: Use descriptive name like "Pencil_Mark_Detection" or "Surface_Defect_Segmentation"
  4. Select Recipe Type: Choose "Segmentation" from dropdown
  5. Click OK to create

New

1.2 Activate Recipe

  1. Find your recipe in the list (shows as "Inactive")
  2. Click Actions > Activate
  3. Click Activate to confirm

Activate recipe

Result: Recipe is now active and ready for configuration.

Step 2: Access Recipe Editor

  1. Click Edit next to your active recipe
  2. Click Open Editor to confirm

Edit

You'll now see the Recipe Editor with segmentation-specific options.

Step 3: Configure Camera Settings

3.1 Open Imaging Configuration

  1. Click Configure Imaging (lower left-hand side)

Configure Image

3.2 Optimize Focus for Segmentation

Focus is critical for accurate edge detection:

  1. Position your test part in the camera view
  2. Adjust Focus until edges are sharp and clear
  3. Test with different parts to ensure consistent focus across your range
Tip:
  • Focus on the surface where defects/features will appear
  • Ensure the entire area of interest is in sharp focus
  • Slight over-sharpening is better than soft focus for segmentation

3.3 Set Optimal Exposure

Proper exposure ensures consistent feature detection:

  1. Adjust Exposure for balanced lighting
  2. Avoid overexposed areas (pure white regions)
  3. Ensure features are visible with good contrast

Segmentation Exposure Guidelines:

  • Features should have clear contrast with background
  • Avoid shadows that could be mistaken for defects
  • Test with various part conditions (clean, dirty, worn)

3.4 Configure LED Lighting Pattern

Choose lighting based on what you're segmenting:

| Feature Type | Recommended Lighting | Why |
| --- | --- | --- |
| Surface defects | Bright field | Even illumination shows surface irregularities |
| Scratches/cracks | Side lighting | Creates shadows that highlight linear defects |
| Raised features | Dark field | Makes raised areas stand out from background |
| Liquid spills | Side lighting | Shows surface texture differences |

3.5 Adjust Gamma for Feature Enhancement

  1. Increase Gamma to enhance contrast between features and background
  2. Test different values while viewing your target features
  3. Find the setting that makes features most distinguishable

3.6 Save Configuration

  1. Review settings in live preview
  2. Click Save Imaging Settings

Save Settings

Checkpoint: Features should be clearly visible with good contrast.

Step 4: Set Up Template and Alignment

4.1 Navigate to Template Section

Click "Template Image and Alignment" in breadcrumb menu

4.2 Configure Alignment (Optional)

Template and alignment

For this example, we'll skip alignment:

  1. Select Skip Aligner if parts are consistently positioned
  2. Click Save

Template image

When to Use Aligner: Enable when parts arrive in varying positions or orientations that would affect segmentation accuracy.

Step 5: Define Inspection Region

5.1 Navigate to Inspection Setup

Click "Inspection Setup" in breadcrumb menu

5.2 Set Region of Interest (ROI)

The ROI defines where segmentation will occur:

  1. Position a test part in camera view
  2. Drag ROI corners to frame the inspection area
  3. Size ROI appropriately:
    • Include all areas where features might appear
    • Exclude unnecessary background regions
    • Leave small buffer around expected feature locations

ROI Setup

5.3 ROI Best Practices for Segmentation

| Do | Don't |
| --- | --- |
| Cover entire inspection surface | Include irrelevant background objects |
| Leave buffer space around edges | Make ROI too small for feature variation |
| Consider part positioning variation | Overlap with fixtures or tooling |
| Test with largest expected features | Include areas with permanent markings |

5.4 Save ROI Settings

  1. Verify ROI covers all target areas
  2. Click Save

Step 6: Label Training Data

6.1 Navigate to Label And Train

Click "Label And Train" in breadcrumb menu

6.2 Configure Inspection Class

  1. Click Edit under Inspection Types
  2. Rename class to match your feature (e.g., "Pencil Mark", "Surface Defect", "Spill Area")
  3. Choose class color for visual identification
  4. Save changes

6.3 Capture and Label Training Images

You need a minimum of 10 labeled images, but 15-20 are recommended:

Image Capture Process

Label and Train

  1. Place first test part in inspection area
  2. Take image using camera interface
  3. Use Brush tool to paint over target features
  4. Paint accurately:
    • Cover entire feature area
    • Stay within feature boundaries
    • Don't paint background areas
    • Use consistent labeling approach
  5. Click Save Annotations
  6. Repeat with next part

Labeling Best Practices

| Good Labeling | Poor Labeling |
| --- | --- |
| Precise feature boundaries | Sloppy edge painting |
| Consistent feature definition | Inconsistent criteria |
| Complete feature coverage | Missing feature areas |
| Clean background (unpainted) | Accidental background painting |

6.4 Training Data Variety

Ensure your training set includes:

  • Different feature sizes
  • Various feature intensities
  • Multiple locations within ROI
  • Different lighting conditions (if applicable)
  • Edge cases and borderline examples

6.5 Quality Check Training Data

  1. Review all labeled images
  2. Verify consistent labeling approach
  3. Remove any incorrectly labeled examples
  4. Add more examples if needed

Step 7: Train Segmentation Model

7.1 Start Training Process

  1. Click Return to Live when labeling is complete
  2. Click Train Segmentation Model

Start Training

7.2 Configure Training Parameters

  1. Set Number of Iterations:
    • Fast training: 50-100 iterations (5-10 minutes)
    • Production quality: 200-500 iterations (15-30 minutes)
    • High precision: 500+ iterations (30+ minutes)
  2. Click Start Training

7.3 Monitor Training Progress

Training progress shows:

  • Current iteration number
  • Training accuracy percentage
  • Estimated completion time

Training

Training Controls:

  • Abort Training: Stop if issues arise
  • Finish Training Early: Stop when accuracy is sufficient

Training 2

Tip:
  • 85% accuracy is typically good enough for production
  • Training stops automatically once it reaches the target accuracy
  • More training data is often better than more iterations

Step 8: Test Segmentation Performance

8.1 Access Live Preview

  1. Click Live Preview after training completes
  2. Test with various parts:
    • Known good parts (should show no/minimal segmentation)
    • Known defective parts (should highlight defects)
    • Edge cases and borderline examples

Live preview

8.2 Evaluate Results

Check segmentation quality:

| Metric | Good Performance | Needs Improvement |
| --- | --- | --- |
| Accuracy | Finds real features consistently | Misses obvious features |
| Precision | Few false positives | Many background areas highlighted |
| Edge Quality | Clean, accurate boundaries | Rough or inaccurate edges |
| Consistency | Similar results on repeat tests | Highly variable results |

8.3 Troubleshooting Poor Results

| Problem | Likely Cause | Solution |
| --- | --- | --- |
| Missing features | Insufficient training data | Add more labeled examples |
| False positives | Poor lighting/contrast | Improve imaging settings |
| Rough edges | Poor image quality | Improve focus/lighting |
| Inconsistent results | Inadequate training variety | Add more diverse examples |

Step 9: Configure Pass/Fail Logic

9.1 Access IO Block

  1. Ensure the AI model shows green (trained status)
  2. Navigate to the IO Block via the breadcrumb menu

9.2 Remove Default Logic

  1. Delete Classification Block Logic node
  2. Prepare to build custom segmentation logic

9.3 Build Segmentation Flow

Create a Node-RED flow with these components:

  1. Drag nodes from the left panel:
    • Function node (for logic)
    • Debug node (for testing; see the sketch below)
    • Final Pass/Fail node
  2. Connect nodes with wires

NodeRed
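
If you want to see what the segmentation result contains before writing pass/fail logic, one option is to wire a Function node into the Debug node that summarizes the detected blobs. This is a minimal sketch, assuming the same msg.payload.segmentation.blobs structure and pixel_count field used in the examples below; no other blob fields are assumed.

// Sketch: summarize segmentation output for the Debug sidebar
const blobs = msg.payload.segmentation.blobs || [];
msg.payload = {
    blobCount: blobs.length,                                              // number of segmented features
    totalPixels: blobs.reduce((sum, blob) => sum + blob.pixel_count, 0),  // combined area in pixels
    largestBlob: blobs.reduce((max, blob) => Math.max(max, blob.pixel_count), 0) // largest single feature
};
return msg;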

9.4 Configure Logic Based on Your Needs

Option A: Pass if No Defects Detected

Use Case: Quality inspection where any detected feature is a fail

Function Node Code:

const allBlobs = msg.payload.segmentation.blobs;
const results = allBlobs.length < 1; // Pass if no features found
msg.payload = results;
return msg;
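
Option A is the strictest rule, since any segmented region fails the part. If lighting noise occasionally produces tiny spurious blobs, Option B below, with a small pixel-count threshold, is usually the more robust choice.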

Option B: Pass if Small Defects Only

Use Case: Accept minor defects below size threshold

Function Node Code:

const threshold = 500; // Adjust pixel count threshold
const allBlobs = msg.payload.segmentation.blobs;
const allUnderThreshold = allBlobs.every(blob => blob.pixel_count < threshold);
msg.payload = allUnderThreshold;
return msg;
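
A practical way to pick the threshold is to check the pixel_count values of the largest acceptable defects in your labeled images (for example, using the Debug summary sketch above), set the threshold slightly above them, and then refine it during testing in Step 10.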

Option C: Pass if Total Defect Area is Small

Use Case: Accept parts with limited total defect area

Function Node Code:

const threshold = 5000; // Adjust total pixel threshold
const allBlobs = msg.payload.segmentation.blobs;
const totalArea = allBlobs.reduce((sum, blob) => sum + blob.pixel_count, 0);
msg.payload = totalArea < threshold;
return msg;
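
The thresholds above are in pixels. If you know your optical scale, you can express the limit in physical units instead. The sketch below is a variation of Option C and assumes a hypothetical calibration value MM_PER_PIXEL (millimetres per pixel at the inspection plane), which you would measure for your own setup.

// Sketch only: accept parts based on total defect area in mm^2 instead of pixels.
// MM_PER_PIXEL is a hypothetical calibration value measured for your setup.
const MM_PER_PIXEL = 0.05;   // example: 0.05 mm per pixel at the inspection plane
const maxAreaMm2 = 12.5;     // example limit; equals the 5000-pixel threshold in Option C at this scale
const allBlobs = msg.payload.segmentation.blobs;
const totalPixels = allBlobs.reduce((sum, blob) => sum + blob.pixel_count, 0);
const totalAreaMm2 = totalPixels * MM_PER_PIXEL * MM_PER_PIXEL; // each pixel covers MM_PER_PIXEL^2 mm^2
msg.payload = totalAreaMm2 < maxAreaMm2;
return msg;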

9.5 Configure Function Node

  1. Double-click Function node
  2. Copy appropriate code from examples above
  3. Paste into "On Message" tab
  4. Adjust threshold values for your application
  5. Click Done

9.6 Deploy and Test Logic

  1. Click Deploy to activate logic
  2. Navigate to HMI for testing
  3. Test with known good and bad parts
  4. Verify pass/fail results match expectations

Step 10: Production Validation

10.1 Comprehensive Testing

Test segmentation system with:

| Test Case | Expected Result | Action if Failed |
| --- | --- | --- |
| Clean parts | Pass (no segmentation) | Adjust thresholds or retrain |
| Minor defects | Pass/Fail per your criteria | Refine logic parameters |
| Major defects | Fail (clear segmentation) | Check model accuracy |
| Edge cases | Consistent behavior | Add training data |

10.2 Performance Validation

Monitor these metrics:

  • Processing time per inspection
  • Consistency across multiple tests
  • Accuracy with production lighting
  • Reliability over extended operation

10.3 Final Adjustments

If performance isn't satisfactory:

  1. Add more training data for edge cases
  2. Adjust threshold values in logic
  3. Improve imaging conditions
  4. Retrain model with additional iterations

Success! Your Segmentation System is Ready

You now have a working segmentation system that can:

  • Automatically detect specific features or defects
  • Measure areas with pixel-level precision
  • Apply custom pass/fail logic based on your requirements
  • Integrate with production systems via I/O controls

Advanced Configuration Options

Custom Threshold Logic

For complex acceptance criteria, combine multiple conditions:

const smallThreshold = 200;
const largeThreshold = 1000;
const maxTotalArea = 3000;

const allBlobs = msg.payload.segmentation.blobs;
const smallBlobs = allBlobs.filter(blob => blob.pixel_count < smallThreshold);
const largeBlobs = allBlobs.filter(blob => blob.pixel_count > largeThreshold);
const totalArea = allBlobs.reduce((sum, blob) => sum + blob.pixel_count, 0);
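
// One possible way to combine these conditions (an assumption; adjust to your
// own acceptance criteria): pass only when there are no large blobs and the
// combined defect area stays under the overall limit. smallBlobs could also
// be counted or capped, depending on your requirements.
msg.payload = largeBlobs.length === 0 && totalArea < maxTotalArea;
return msg;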